
Interpretability

Characteristic Name: Interpretability
Dimension: Usability and Interpretability
Description: Data should be interpretable
Granularity: Information object
Implementation Type: Process-based approach
Characteristic Type: Usage

Verification Metric:

The number of tasks failed or underperformed due to a lack of interpretability of data
The number of complaints received due to a lack of interpretability of data


The implementation guidelines are practices to follow with regard to the characteristic. The scenarios are examples of how each guideline can be implemented.

Guidelines: Scenario:
Standardise the interpretation process by clearly stating the criteria for interpreting results, so that an interpretation of one dataset is reproducible. (1) A 10% drop in production efficiency is classed as a severe decline that needs quick remedial action.
Facilitate the interpretation process based on the user's task at hand. (1) A traffic light system indicates the efficiency of a production line to the workers; a detailed efficiency report goes to the production manager; a concise efficiency report goes to the production line supervisors.
Design the structure of information in such a way that further format conversions are not necessary for interpretation (see the sketch after this list). (1) A rating scale of (poor, good, excellent) is better than (1, 2, 3) for rating a service level.
Ensure that information is consistent between units of analysis (organisations, geographical areas, populations of concern, etc.) and over time, allowing comparisons to be made. (1) The number of doctors per person is used to compare health facilities between regions. (2) The same populations are used over time to analyse epidemic growth.
Use appropriate visualisation tools to facilitate interpretation of data through comparisons and contrasts. (1) Use of tree maps, bar charts, and line graphs.
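A minimal sketch of the rating-scale and comparability guidelines above, in Python. The coding scheme, field names, and figures are assumptions for illustration, not from the source:

```python
# Illustrative sketch only (names and values are assumptions, not from
# the source): the rating-scale and per-capita guidelines in Python.

SERVICE_LABELS = {1: "poor", 2: "good", 3: "excellent"}  # hypothetical coding

def label_service_rating(code: int) -> str:
    """Report a self-describing label instead of an opaque numeric
    code, so no format conversion is needed at interpretation time."""
    return SERVICE_LABELS.get(code, "unknown")

def doctors_per_1000(doctors: int, population: int) -> float:
    """Normalise a raw count to a per-capita rate so that regions of
    different sizes can be compared on the same scale."""
    return 1000 * doctors / population

print(label_service_rating(2))                    # good
print(round(doctors_per_1000(450, 120_000), 2))   # 3.75 doctors per 1,000 people
```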

Validation Metric:

How mature is the process to maintain the interpretability of data

These are examples of how the characteristic might occur in a database.

Example: Source:
When an analyst has data with a freshness metric equal to 0, does it mean the data at hand is fresh? What about freshness equal to 10 (suppose we do not stick to the notion proposed in [23])? Is it even fresher? Similar issues may arise with the notion of age: e.g., with age A(e) = 0, we cannot undoubtedly speak about a positive or negative data characteristic, because the semantic meaning of “age” mostly corresponds to the neutral notion of a “period of time”. O. Chayka, T. Palpanas, and P. Bouquet, “Defining and Measuring Data-Driven Quality Dimension of Staleness”, Trento: University of Trento, Technical Report # DISI-12-016, 2012.
Consider a database containing orders from customers. A practice for handling complaints and returns is to create an “adjustment” order for backing out the original order and then writing a new order for the corrected information if applicable. This procedure assigns new order numbers to the adjustment and replacement orders. For the accounting department, this is a high-quality database. All of the numbers come out in the wash. For a business analyst trying to determine trends in growth of orders by region, this is a poor-quality database. If the business analyst assumes that each order number represents a distinct order, his analysis will be all wrong. Someone needs to explain the practice and the methods necessary to unravel the data to get to the real numbers (if that is even possible after the fact). J. E. Olson, “Data Quality: The Accuracy Dimension”, Morgan Kaufmann Publishers, 9 January 2003.
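The staleness example above turns on ambiguous semantics for “freshness” and “age”. A minimal sketch, assuming age is measured in days and a 30-day tolerance (both assumptions, not from the cited report), shows how stating the convention explicitly removes the ambiguity:

```python
# Minimal sketch (the day unit and 30-day tolerance are assumptions, not
# from the cited report): making the semantics of "age" and "freshness"
# explicit so that a value of 0 or 10 has only one interpretation.

from datetime import datetime, timezone, timedelta

def age_days(last_updated: datetime, now: datetime | None = None) -> float:
    """Age A(e): elapsed time since the last update, in days.
    0 means 'updated just now'; larger values mean older data."""
    now = now or datetime.now(timezone.utc)
    return (now - last_updated).total_seconds() / 86_400

def freshness(last_updated: datetime, max_age_days: float = 30.0) -> float:
    """Freshness on [0, 1]: 1.0 = just updated, 0.0 = at or beyond the
    tolerated age. The scale and its direction are stated, not implied."""
    return max(0.0, 1.0 - age_days(last_updated) / max_age_days)

updated = datetime.now(timezone.utc) - timedelta(days=3)
print(round(age_days(updated), 1))    # 3.0
print(round(freshness(updated), 2))   # 0.9
```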

The definitions are examples of the characteristic as it appears in the sources provided.

Definition: Source:
Comparability of data refers to the extent to which data is consistent between organisations and over time allowing comparisons to be made. This includes using equivalent reporting periods. HIQA 2011. International Review of Data Quality Health Information and Quality Authority (HIQA), Ireland. http://www.hiqa.ie/press-release/2011-04-28-international-review-data-quality.
Data is not ambiguous if it allows only one interpretation. Anti-example: Song.composer = ‘Johann Strauss’ (father or son?). KIMBALL, R. & CASERTA, J. 2004. The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data. Wiley Publishing.
Comparability aims at measuring the impact of differences in applied statistical concepts and measurement tools/procedures when statistics are compared between geographical areas, non-geographical domains, or over time. LYON, M. 2008. Assessing Data Quality, Monetary and Financial Statistics. Bank of England. http://www.bankofengland.co.uk/statistics/Documents/ms/articles/art1mar08.pdf.
The most important quality characteristic of a format is its appropriateness. One format is more appropriate than another if it is better suited to users’ needs. The appropriateness of the format depends upon two factors: user and medium used. Both are of crucial importance. The abilities of human users and computers to understand data in different formats are vastly different. For example, the human eye is not very good at interpreting some positional formats, such as bar codes, although optical scanning devices are. On the other hand, humans can assimilate much data from a graph, a format that is relatively hard for a computer to interpret. Appropriateness is related to the second quality dimension, interpretability. REDMAN, T. C. 1997. Data quality for the information age, Artech House, Inc.

 

Accuracy to reference source

Characteristic Name: Accuracy to reference source
Dimension: Accuracy
Description: Data should agree with an identified source
Granularity: Element
Implementation Type: Process-based approach
Characteristic Type: Usage

Verification Metric:

The number of tasks failed or underperformed due to a lack of accuracy to reference sources
The number of complaints received due to a lack of accuracy to reference sources


The implementation guidelines are practices to follow with regard to the characteristic. The scenarios are examples of how each guideline can be implemented.

Guidelines: Scenario:
Establish the source for a data attribute and maintain facilities to access the correct source. (1) The actual cost of raw material is taken from supplier invoices, not from quotations. (2) Interest rates are taken from daily central bank statistics, which are available online in the finance system.
Establish the data capturing points in the business process without leading to any ambiguity, and enforce process-level validation mechanisms to ensure the process is followed. (1) Personal drug utilisation data is captured at POS units at pharmacies, and ALL pharmacies in the country are connected to a central system (all pharmacy data is considered). (2) In a barcode scanning system in a production system, finished products cannot be scanned in as quality-checked products (Finished and Quality checked are the two data capturing points here).
Implement effective techniques and efficient technological solutions (devices) for collecting data, which minimise data errors and omissions in data capture. (1) Barcode scanning is used to enter sales of products. (2) Invoices are scanned into the system and the price is automatically recognised. (3) Standard forms are used to collect patient data.
If data is collected and transferred batch-wise, establish the frequencies of data transfers/uploads considering the nature of the data and business needs. (1) All drug utilisation data collected in the pharmacies is transferred to the central system at the end of every month. (2) Production efficiency data is transferred to monitoring systems every 30 minutes.
Implement an effective and efficient data transfer technology which does not cause distortions or omissions in the data. (1) Data migration tools.
Define and implement appropriate input validation rules that notify the data collector/operator about erroneous values being entered, prevent erroneous values from entering the database, or flag erroneous values for clear identification (see the sketch after this list). (1) A telephone number field does not accept non-numeric characters.
Implement flexible data capturing interfaces to accommodate important but out-of-the-ordinary data. (1) A field exists to record special comments in a goods received note (GRN).
Implement and enforce standardised data capturing procedures/best practices through the system when collecting data. (1) Standard wait times are used in taking blood samples from a patient (e.g. one hour after a meal).
Establish mitigation mechanisms to handle measurement errors and ensure that acceptable error tolerance levels are established. (1) Calibrate the equipment on a routine basis.
Identify barriers to data collection or barriers for data providers, and take appropriate actions to remove them. (1) Maintain a log file of response failures of a web-based survey and then eliminate the root causes.
Identify the practices which encourage data providers. (1) Reward survey participants.
Conduct regular training programs for data capturing/entering staff and educate them on possible data capturing problems and how to overcome data entry errors depending on the context. (1) Do not restart the scanner when it hangs while scanning. (2) Repeat a telephone number in a different pattern to validate it with the source, e.g. 045 220 3719 is repeated as 04 52 20 37 19.
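A minimal sketch of the input-validation guideline above, in Python. The 9-10 digit national phone format is an assumption for illustration, not from the source:

```python
# Illustrative sketch (the 9-10 digit national format is an assumption,
# not from the source): an input validation rule that rejects non-numeric
# telephone numbers before they reach the database and tells the operator why.

import re

PHONE_PATTERN = re.compile(r"^\d{9,10}$")  # hypothetical national format

def validate_phone(raw: str) -> tuple[bool, str]:
    """Return (is_valid, cleaned_value_or_message). Spaces are tolerated
    on entry but stripped; any other non-digit character is rejected."""
    cleaned = raw.replace(" ", "")
    if PHONE_PATTERN.fullmatch(cleaned):
        return True, cleaned
    return False, f"Rejected '{raw}': non-numeric characters or wrong length"

print(validate_phone("045 220 3719"))   # (True, '0452203719')
print(validate_phone("045-ABC-3719"))   # (False, "Rejected '045-ABC-3719': ...")
```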

Validation Metric:

How mature is the process for ensuring accuracy to reference sources

These are examples of how the characteristic might occur in a database.

Example: Source:
In this scenario, the parent, a US Citizen, applying to a European school completes the Date of Birth (D.O.B) on the application form in the US date format, MM/DD/YYYY rather than the European DD/MM/YYYY format, causing the representation of days and months to be reversed. N. Askham, et al., “The Six Primary Dimensions for Data Quality Assessment: Defining Data Quality Dimensions”, DAMA UK Working Group, 2013.
Let us consider two databases, say A and B, that contain the same data. If at time t a user updates data in database A and another user reads the same data from database B at time t′ (t < t′), the latter will read incorrect data if t and t′ are included within the time interval between two subsequent data realignments. C. Cappiello, C. Francalanci, and B. Pernici, “Time-Related Factors of Data Quality in Multichannel Information Systems” in Journal of Management Information Systems, Vol. 20, No. 3, M.E. Sharpe, Inc., 2004, pp. 71-91.
Consider an air traffic control center which receives data from several controller stations. To regulate air traffic, the control center has to cope with uncertain data. Thus, the decision process must balance the delay of waiting for more accurate data on airplane positions against the critical period of time in which an “effective” decision must be made to regulate traffic. B. Pernici, “Advanced Information Systems Engineering”, in Proc. of the 22nd International Conference, CAiSE, Hammamet, Tunisia, June 2010.
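The D.O.B. example above can be reproduced in a few lines. This sketch (the sample date is an assumption) shows the same string parsing to two different dates under the US and European conventions, which is why forms should capture dates in one explicit, unambiguous format such as ISO 8601:

```python
# Sketch of the D.O.B. ambiguity (the sample date is an assumption):
# the same string parses to two different dates depending on the locale
# convention; capturing dates in ISO 8601 (YYYY-MM-DD) avoids this.

from datetime import datetime

raw = "03/04/2013"
us_style = datetime.strptime(raw, "%m/%d/%Y").date()  # March 4, 2013 (MM/DD/YYYY)
eu_style = datetime.strptime(raw, "%d/%m/%Y").date()  # 3 April 2013 (DD/MM/YYYY)
assert us_style != eu_style   # one string, two interpretations

print(us_style.isoformat())   # '2013-03-04' -- unambiguous ISO 8601
```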

The definitions are examples of the characteristic as it appears in the sources provided.

Definition: Source:
A measure of the correctness of the content of the data (which requires an authoritative source of reference to be identified and accessible). D. McGilvray, “Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information”, Morgan Kaufmann Publishers, 2008.
The data agrees with an original, corroborative source record of data, such as a notarized birth certificate, document, or unaltered electronic data received from a party outside the control of the organization that is demonstrated to be a reliable source. ENGLISH, L. P. 2009. Information quality applied: Best practices for improving business information, processes and systems, Wiley Publishing.
1) Accuracy of data refers to how closely the data correctly captures what it was designed to capture. Verification of accuracy involves comparing the collected data to an external reference source that is known to be valid. Capturing data as close as possible to the point of activity contributes to accuracy. The need for accuracy must be balanced with the importance of the decisions that will be made based on the data and the cost and effort associated with data collection. If data accuracy is compromised in any way then this information should be made known to the data users.

2) Reliability of data refers to the extent to which data is collected consistently over time and by different organisations either manually or electronically.

HIQA 2011. International Review of Data Quality Health Information and Quality Authority (HIQA), Ireland. http://www.hiqa.ie/press-release/2011-04-28-international-review-data-quality.
Data accuracy refers to the degree with which data values agree with an identified source of correct information. There are different sources of correct information: database of record, a similar, corroborative set of data values from another table, dynamically computed values, the result of a manual workflow, or irate customers. LOSHIN, D. 2001. Enterprise knowledge management: The data quality approach, Morgan Kaufmann Pub.
Data accuracy refers to the degree with which data correctly represents the “real-life” objects they are intended to model. In many cases, accuracy is measured by how the values agree with an identified source of correct information (such as reference data). There are different sources of correct information: a database of record, a similar corroborative set of data values from another table, dynamically computed values, or perhaps the result of a manual process. LOSHIN, D. 2006. Monitoring Data quality Performance using Data Quality Metrics. Informatica Corporation.
Accuracy of a datum refers to the nearness of the value v to some value v’ in the attribute domain, which is considered as the (or maybe only a) correct one for the entity e and the attribute a. In some cases, v’ is referred to as the standard. If the datum’s value v coincides with the value v’, the datum is said to be correct. REDMAN, T. C. 1997. Data quality for the information age, Artech House, Inc.
Degree of correctness of a value when comparing with a reference one STVILIA, B., GASSER, L., TWIDALE, M. B. & SMITH, L. C. 2007. A framework for information quality assessment. Journal of the American Society for Information Science and Technology, 58, 1720-1733.
The extent to which data are correct, reliable, and certified free of error. WANG, R. Y. & STRONG, D. M. 1996. Beyond accuracy: What data quality means to data consumers. Journal of management information systems, 5-33.